We propose an automatic approach for estimating self-reported pain based on facial landmarks extracted from videos. For each video sequence, we decompose the face into four distinct regions, and pain intensity is measured by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is used to represent the landmark trajectories on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. A curve-fitting algorithm is applied to smooth the trajectories, and temporal alignment is performed to compute similarities between trajectories on the manifold. A Support Vector Regression model is then trained to encode pain intensity levels consistent with self-reported pain intensity measurements. Finally, late fusion of the per-region estimates is performed to obtain the final predicted pain level. The proposed approach is evaluated on two publicly available datasets, the UNBC-McMaster Shoulder Pain Archive and the Biovid Heat Pain dataset. We compare our approach against state-of-the-art methods on both datasets using different testing protocols, showing the competitiveness of the proposed approach.
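The Gram-matrix representation of a landmark trajectory can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the landmark count, frame count, and centering step are assumptions, and a centered configuration of p landmarks in 2D yields a p x p positive semi-definite Gram matrix of rank at most 2.

```python
import numpy as np

def gram_matrix(landmarks):
    """Map one frame's facial landmarks (p x 2 array) to a rank-<=2
    positive semi-definite Gram matrix after centering."""
    Z = landmarks - landmarks.mean(axis=0)  # remove translation
    return Z @ Z.T                          # p x p PSD matrix of rank <= 2

# A trajectory on the PSD manifold: one Gram matrix per video frame.
rng = np.random.default_rng(0)
video = rng.normal(size=(30, 66, 2))        # 30 frames, 66 landmarks (x, y)
trajectory = np.stack([gram_matrix(f) for f in video])
print(trajectory.shape)                     # (30, 66, 66)
```

Each frame thus becomes a point on the manifold of fixed-rank PSD matrices, and a video becomes a trajectory on that manifold, which can then be smoothed and temporally aligned.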
In this work, we address the problem of 4D facial expression generation. This is usually tackled by animating a neutral 3D face to reach an expression peak and then returning to the neutral state. In the real world, however, people show more complex expressions and switch from one expression to another. We therefore propose a new model that generates transitions between different expressions and synthesizes long, composed 4D expressions. This involves three sub-problems: (i) modeling the temporal dynamics of expressions, (ii) learning transitions between them, and (iii) deforming a generic mesh. We propose to encode the temporal evolution of expressions through the motion of a set of 3D landmarks, which we learn to generate by training a manifold-valued GAN (Motion3DGAN). To allow the generation of composed expressions, the model accepts two labels encoding the starting and ending expressions. The final sequence of meshes is generated by a Sparse2Dense mesh decoder (S2D-Dec) that maps landmark displacements to dense, per-vertex displacements of a known mesh topology. By explicitly working with motion trajectories, the model is totally independent of identity. Extensive experiments on five public datasets show that our proposed approach brings significant improvements over previous solutions while retaining good generalization to unseen data.
In this paper, we tackle the task of comparing and classifying sequences of 3D human shapes. The nonlinear dynamics of human motion and the varying surface parameterization over time make this task very challenging. To address it, we propose to embed the 3D shape sequences in an infinite-dimensional space, the space of varifolds, endowed with an inner product derived from a given positive definite kernel. More specifically, our approach involves two steps: 1) surfaces are represented as varifolds, a representation that makes the metric invariant to rigid motions and to parameterization; 2) sequences of 3D shapes are represented by Gram matrices derived from their infinite-dimensional Hankel matrices. The problem of comparing two sequences of 3D human shapes is then formulated as a comparison of two Gram-Hankel matrices. Extensive experiments on the CVSSP3D and Dyna datasets show that our method is competitive with state-of-the-art approaches in 3D human motion sequence retrieval. The code for the experiments is available at https://github.com/cristal-3dsam/humancomparisonvarifolds.
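The Gram-Hankel construction can be sketched in finite dimensions as follows. This is a simplified illustration under stated assumptions: the per-frame feature dimension, the Hankel truncation depth `k`, and the Frobenius normalization are choices made here for the sketch, while the paper works with infinite-dimensional (kernel-induced) Hankel matrices. The key property shown is that sequences of different lengths map to fixed-size descriptors.

```python
import numpy as np

def gram_hankel(features, k):
    """Build a truncated Hankel matrix from a sequence of per-frame
    feature vectors (T x d) with k stacked shifts, then return its
    (normalized) Gram matrix -- a fixed-size descriptor of the dynamics."""
    T, d = features.shape
    cols = [features[i:i + k].reshape(-1) for i in range(T - k + 1)]
    H = np.stack(cols, axis=1)              # (k*d) x (T-k+1) Hankel matrix
    G = H @ H.T
    return G / np.linalg.norm(G)            # Frobenius-normalized Gram matrix

rng = np.random.default_rng(1)
seq_a = rng.normal(size=(40, 8))            # e.g. 40 frames of shape features
seq_b = rng.normal(size=(55, 8))            # a sequence of different length
Ga, Gb = gram_hankel(seq_a, k=5), gram_hankel(seq_b, k=5)
print(Ga.shape, Gb.shape)                   # same size despite different T
```

Two sequences can then be compared through a distance between their Gram-Hankel matrices, regardless of the original sequence lengths.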
Existing multimodal stress/pain recognition approaches generally extract features from different modalities independently, thus ignoring cross-modality correlations. This paper proposes a new geometric framework for multimodal stress/pain detection that utilizes Symmetric Positive Definite (SPD) matrices as a representation, incorporating the correlations of physiological and behavioral signals through covariance and cross-covariance. Considering the non-linearity of the Riemannian manifold of SPD matrices, well-known machine learning techniques are not suited to classifying these matrices directly. Therefore, a tangent space mapping method is adopted to map the derived sequences of SPD matrices to sequences of vectors in the tangent space, where an LSTM-based network can be used for classification. The proposed framework has been evaluated on two public multimodal datasets, achieving state-of-the-art results on both the stress and the pain detection task.
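The tangent space mapping step can be sketched as follows, assuming the standard affine-invariant construction: each SPD matrix S is whitened by a reference point P and projected via the matrix logarithm, then vectorized. The reference point (here a simple arithmetic mean), the matrix sizes, and the sqrt(2) scaling are assumptions for the sketch, not necessarily the paper's exact choices.

```python
import numpy as np

def _sym_pow(S, p):
    """Matrix power of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w**p) @ V.T

def _sym_log(S):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def tangent_vector(S, P):
    """Project SPD matrix S onto the tangent space at reference point P
    and vectorize it (upper triangle, off-diagonal scaled by sqrt(2))."""
    P_isqrt = _sym_pow(P, -0.5)
    T = _sym_log(P_isqrt @ S @ P_isqrt)     # symmetric tangent matrix
    i, j = np.triu_indices_from(T)
    scale = np.where(i == j, 1.0, np.sqrt(2.0))
    return T[i, j] * scale

# Map a sequence of SPD matrices to tangent vectors (LSTM-ready).
rng = np.random.default_rng(2)
A = rng.normal(size=(20, 6, 6))
spd_seq = A @ A.transpose(0, 2, 1) + 6 * np.eye(6)   # 20 SPD matrices
P = spd_seq.mean(axis=0)                              # reference point
vectors = np.stack([tangent_vector(S, P) for S in spd_seq])
print(vectors.shape)                                  # (20, 21)
```

The resulting vector sequence lives in a Euclidean space, so a standard recurrent classifier such as an LSTM can consume it directly.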
We address the challenging task of human reaction generation, which aims to generate a corresponding reaction based on an input action. Most existing works do not focus on generating and predicting the reaction, and cannot generate the motion when only the action is given as input. To address this limitation, we propose a novel Interaction Transformer (InterFormer) consisting of a Transformer network with temporal and spatial attention. Specifically, the temporal attention captures the temporal dependencies of the motions of the characters and of their interaction, while the spatial attention learns the dependencies between the different body parts of each character and those of the interacting partner. Moreover, we propose to use graphs, via an interaction distance module, to improve the performance of the spatial attention by helping it focus on nearby joints of the two characters. Extensive experiments on the SBU Interaction, K3HI, and DuetDance datasets demonstrate the effectiveness of InterFormer. Our method is general and can be used to generate more complex and long-term interactions.
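One plausible way to bias spatial attention toward nearby joints, in the spirit of the interaction distance module described above, is an additive distance penalty on the attention scores. This is a hypothetical sketch: the bias form, the `alpha` scale, and the joint/feature dimensions are all assumptions, not the paper's actual module.

```python
import numpy as np

def distance_biased_attention(Q, K, V, dist, alpha=1.0):
    """Scaled dot-product attention with an additive bias that decays
    with inter-joint distance, so nearby joints of the two characters
    receive more attention weight (illustrative sketch only)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) - alpha * dist   # penalize distant joints
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over keys
    return w @ V

rng = np.random.default_rng(3)
J, d = 15, 16                                      # joints per character, feature dim
Q, K, V = (rng.normal(size=(J, d)) for _ in range(3))
dist = rng.uniform(0, 2, size=(J, J))              # pairwise joint distances
out = distance_biased_attention(Q, K, V, dist)
print(out.shape)                                   # (15, 16)
```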
The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite achieving important milestones, several challenges are yet to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the autonomous navigation field evolves fast, so writing survey papers regularly is crucial to keep the research community aware of the current status of the field. Second, deep learning methods have revolutionized many fields, including autonomous navigation. It is therefore necessary to give an appropriate treatment of the role of deep learning in autonomous navigation, which this paper also covers. Future work and research gaps are discussed as well.
We introduce a machine-learning (ML)-based weather simulator--called "GraphCast"--which outperforms the most accurate deterministic operational medium-range weather forecasting system in the world, as well as all previous ML baselines. GraphCast is an autoregressive model, based on graph neural networks and a novel high-resolution multi-scale mesh representation, which we trained on historical weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF)'s ERA5 reanalysis archive. It can make 10-day forecasts, at 6-hour time intervals, of five surface variables and six atmospheric variables, each at 37 vertical pressure levels, on a 0.25-degree latitude-longitude grid, which corresponds to roughly 25 x 25 kilometer resolution at the equator. Our results show GraphCast is more accurate than ECMWF's deterministic operational forecasting system, HRES, on 90.0% of the 2760 variable and lead time combinations we evaluated. GraphCast also outperforms the most accurate previous ML-based weather forecasting model on 99.2% of the 252 targets it reported. GraphCast can generate a 10-day forecast (35 gigabytes of data) in under 60 seconds on Cloud TPU v4 hardware. Unlike traditional forecasting methods, ML-based forecasting scales well with data: by training on bigger, higher quality, and more recent data, the skill of the forecasts can improve. Together these results represent a key step forward in complementing and improving weather modeling with ML, open new opportunities for fast, accurate forecasting, and help realize the promise of ML-based simulation in the physical sciences.
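The autoregressive structure described above (a 10-day forecast from 6-hour steps) can be sketched generically. This is a toy illustration of the rollout loop only: the one-step model here is a stand-in linear damping, not GraphCast's graph neural network, and the state shape is arbitrary.

```python
import numpy as np

def rollout(step_fn, state, n_steps):
    """Autoregressive rollout: repeatedly apply a one-step (6-hour)
    forecast model, feeding each prediction back in as the next input."""
    states = []
    for _ in range(n_steps):
        state = step_fn(state)
        states.append(state)
    return np.stack(states)

# Stand-in for the learned one-step model (assumption: the GNN is
# replaced by a toy linear damping just to show the loop structure).
decay = 0.99
step_fn = lambda x: decay * x
initial = np.ones((13, 9))                          # toy grid of weather variables
forecast = rollout(step_fn, initial, n_steps=40)    # 40 x 6h = 10 days
print(forecast.shape)                               # (40, 13, 9)
```

In this setup, forecast errors compound across steps, which is why the accuracy of the one-step model at long lead times is the central training challenge for autoregressive weather models.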
Low Earth Orbit (LEO) constellations, each comprising a large number of satellites, have become a new source of big data "from the sky". Downloading such data to a ground station (GS) for big data analytics demands very high bandwidth and involves large propagation delays. Federated Learning (FL) offers a promising solution because it allows data to stay in-situ (never leaving satellites) and it only needs to transmit machine learning model parameters (trained on the satellites' data). However, the conventional, synchronous FL process can take several days to train a single FL model in the context of satellite communication (Satcom), due to a bottleneck caused by straggler satellites. In this paper, we propose an asynchronous FL framework for LEO constellations called AsyncFLEO to improve FL efficiency in Satcom. Not only does AsynFLEO address the bottleneck (idle waiting) in synchronous FL, but it also solves the issue of model staleness caused by straggler satellites. AsyncFLEO utilizes high-altitude platforms (HAPs) positioned "in the sky" as parameter servers, and consists of three technical components: (1) a ring-of-stars communication topology, (2) a model propagation algorithm, and (3) a model aggregation algorithm with satellite grouping and staleness discounting. Our extensive evaluation with both IID and non-IID data shows that AsyncFLEO outperforms the state of the art by a large margin, cutting down convergence delay by 22 times and increasing accuracy by 40%.
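Staleness discounting, the third component above, can be sketched as follows. This is a simplified illustration under stated assumptions: the exponential `decay**s` discount, the server mixing rate of 0.5, and the group count are placeholders, not AsyncFLEO's exact aggregation rule.

```python
import numpy as np

def aggregate(global_model, updates, staleness, decay=0.5):
    """Asynchronous FL aggregation with staleness discounting: an update
    that is s rounds stale is down-weighted by decay**s, so straggler
    satellites contribute less the staler their models are."""
    weights = np.array([decay ** s for s in staleness], dtype=float)
    weights /= weights.sum()
    blended = sum(w * u for w, u in zip(weights, updates))
    return 0.5 * global_model + 0.5 * blended   # assumed server mixing rate

rng = np.random.default_rng(4)
global_model = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(4)]    # from 4 satellite groups
staleness = [0, 1, 1, 3]                             # rounds since model pull
new_model = aggregate(global_model, updates, staleness)
print(new_model.shape)                               # (10,)
```

The fresh update (staleness 0) dominates the blend, while the update that lagged three rounds contributes only a small fraction, which is how model staleness is kept from dragging convergence.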
A complete computer vision system can be divided into two main categories: detection and classification. Lane detection algorithms belong to the detection category and have been applied in autonomous driving and smart vehicle systems. The lane detection system is responsible for identifying lane markings in complex road environments, and it also plays a crucial role in warning systems that alert the driver when the car departs from its lane. The implemented lane detection algorithm is mainly divided into two steps: edge detection and line detection. In this paper, we compare the state-of-the-art implementation performance obtained with both FPGA and GPU to evaluate the trade-offs in latency, power consumption, and resource utilization. Our comparison emphasises the advantages and disadvantages of the two systems.
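The two-step pipeline (edge detection, then line detection) can be sketched with a Sobel gradient filter followed by a Hough transform. This is a minimal reference sketch of the classical algorithms named above, not the paper's FPGA or GPU implementation; image size, thresholds, and the accumulator resolution are illustrative choices.

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Step 1 -- edge detection: Sobel gradient magnitude, thresholded."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    H, W = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(kx * patch)      # horizontal gradient
            gy[y, x] = np.sum(kx.T * patch)    # vertical gradient
    return np.hypot(gx, gy) > thresh

def hough_lines(edges, n_theta=180):
    """Step 2 -- line detection: Hough transform in (rho, theta) space;
    return the parameters of the strongest line."""
    H, W = edges.shape
    diag = int(np.ceil(np.hypot(H, W)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), int)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]

img = np.zeros((32, 32))
img[:, 16] = 1.0                     # a vertical "lane marking"
rho, theta = hough_lines(sobel_edges(img))
print(rho, round(theta, 3))          # vertical line recovered at theta = 0
```

On hardware, the trade-off discussed in the paper comes from how these two stages map to the platform: the convolution in step 1 parallelizes naturally on both FPGA and GPU, while the Hough accumulator's irregular memory updates favor different designs on each.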
Automatic segmentation is essential for brain tumor diagnosis, disease prognosis, and follow-up therapy of patients with gliomas. Still, accurate detection of gliomas and their sub-regions in multimodal MRI is very challenging due to the variety of scanners and imaging protocols. Over the last years, the BraTS Challenge has provided a large number of multi-institutional MRI scans as a benchmark for glioma segmentation algorithms. This paper describes our contribution to the BraTS 2022 Continuous Evaluation challenge. We propose a new ensemble of multiple deep learning frameworks, namely DeepSeg, nnU-Net, and DeepSCAN, for automatic glioma boundary detection in pre-operative MRI. It is worth noting that our ensemble took first place in the final evaluation on the BraTS testing dataset, with Dice scores of 0.9294, 0.8788, and 0.8803, and Hausdorff distances of 5.23, 13.54, and 12.05 for the whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, the proposed ensemble method ranked first in the final ranking on another unseen test dataset, namely the Sub-Saharan Africa dataset, achieving mean Dice scores of 0.9737, 0.9593, and 0.9022, and HD95 of 2.66, 1.72, and 3.32 for the whole tumor, tumor core, and enhancing tumor, respectively. The Docker image for the winning submission is publicly available at https://hub.docker.com/r/razeineldin/camed22.
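A common way to fuse multiple segmentation networks, sketched below, is to average their per-class probability maps and take the per-voxel argmax. This is a generic illustration of ensembling, with toy shapes; the actual fusion rule used by the winning DeepSeg/nnU-Net/DeepSCAN ensemble may differ.

```python
import numpy as np

def ensemble_segment(prob_maps):
    """Fuse per-model class-probability maps (models x classes x D x H x W)
    by averaging, then take the argmax label per voxel."""
    return np.mean(prob_maps, axis=0).argmax(axis=0)

rng = np.random.default_rng(5)
# Three models, 4 classes (background + 3 tumor sub-regions), toy 8x8x8 volume
probs = rng.dirichlet(np.ones(4), size=(3, 8, 8, 8)).transpose(0, 4, 1, 2, 3)
labels = ensemble_segment(probs)
print(labels.shape)                     # (8, 8, 8) label volume
```

Averaging probabilities (soft voting) tends to be more robust than majority voting over hard labels, since it preserves each model's confidence.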